erotic content
OpenAI will allow verified adults to use ChatGPT to generate erotic content
The company launched a dedicated ChatGPT experience for under-18 users in September.

New version will allow users to customize the AI assistant's personality under what the firm calls its 'treat adult users like adults' policy.

OpenAI announced plans on Tuesday to relax restrictions on its ChatGPT chatbot, including allowing erotic content for verified adult users, as part of what the company calls a "treat adult users like adults" principle. The plan includes an updated version of ChatGPT that will let users customize their AI assistant's personality, with options for more human-like responses, heavy emoji use, or friend-like behavior. The most significant change is due in December, when OpenAI plans to roll out more comprehensive age-gating that would permit erotic content for adults who have verified their ages.
- Europe > Ukraine (0.08)
- Oceania > Australia (0.05)
- North America > United States > California (0.05)
- Europe > France (0.05)
- Leisure & Entertainment > Sports (0.74)
- Health & Medicine > Consumer Health (0.50)
- Government > Regional Government > North America Government > United States Government (0.33)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Towards Harmful Erotic Content Detection through Coreference-Driven Contextual Analysis
Inez Okulska, Emilia Wiśnios
Adult content detection still poses a great challenge for automation. Existing classifiers primarily focus on distinguishing erotic from non-erotic texts, but they often lack the nuance needed to assess potential harm. Because of its potentially harmful nature, content of this kind falls beyond the reach of generative models: ethical restrictions prohibit large language models (LLMs) from analyzing and classifying harmful erotica, let alone generating it to create synthetic datasets for other neural models. In such instances, where data is scarce and challenging, a thorough analysis of the structure of such texts, rather than a large model, may offer a viable solution. This is especially true given that harmful erotic narratives, despite appearing similar to harmless ones, usually reveal their harmful nature first through contextual information hidden in the non-sexual parts of the narrative. This paper introduces a hybrid neural and rule-based context-aware system that leverages coreference resolution to identify harmful contextual cues in erotic content. Collaborating with professional moderators, we compiled a dataset and developed a classifier capable of distinguishing harmful from non-harmful erotic content. Our hybrid model, tested on Polish text, demonstrates a promising accuracy of 84% and a recall of 80%. Models based on RoBERTa and Longformer without explicit use of coreference chains achieved significantly weaker results, underscoring the importance of coreference resolution in detecting such nuanced content as harmful erotica. This approach also offers the potential for enhanced visual explainability, supporting moderators in evaluating predictions and taking necessary actions to address harmful content.
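To make the core idea concrete, here is a minimal, purely illustrative sketch of coreference-driven contextual cue detection. It is not the paper's hybrid neural system: the cue lexicon, the naive pronoun-resolution heuristic, and all function names below are hypothetical stand-ins, showing only how cues found in non-sexual sentences can be linked back to an entity via (toy) coreference chains.

```python
# Illustrative sketch only. Real systems would use a trained coreference
# resolver and a learned classifier; here everything is rule-based.
import re

# Hypothetical lexicon of contextual cues that may signal harm even in
# the non-sexual parts of a narrative (e.g. coercion or age markers).
CONTEXT_CUES = {"minor", "child", "forced", "unwilling", "threatened"}

PRONOUNS = {"she", "her", "he", "him", "his"}


def naive_coref_chains(sentences):
    """Link each pronoun to the most recently seen capitalized token,
    a crude stand-in for real coreference resolution.
    Returns: entity -> list of sentence indices mentioning it."""
    chains = {}
    last_entity = None
    for i, sent in enumerate(sentences):
        for tok in re.findall(r"[A-Za-z]+", sent):
            low = tok.lower()
            if low in PRONOUNS:
                # Pronoun: attach this sentence to the last entity's chain.
                if last_entity is not None:
                    chains.setdefault(last_entity, []).append(i)
            elif tok[0].isupper() and low not in {"the", "a"}:
                # Capitalized token: treat as a new entity mention.
                last_entity = tok
                chains.setdefault(tok, []).append(i)
    return chains


def harmful_context_score(text):
    """Count contextual cues in sentences that the toy chains link to
    any entity; more cues imply a higher risk score."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chains = naive_coref_chains(sentences)
    linked = {i for idxs in chains.values() for i in idxs}
    hits = 0
    for i in linked:
        words = set(re.findall(r"[a-z]+", sentences[i].lower()))
        hits += len(words & CONTEXT_CUES)
    return hits
```

The point of the chains is that a cue like "unwilling" in a later, non-sexual sentence still counts against the narrative because the pronoun carrying it resolves back to the same entity, which is the contextual signal the paper argues plain RoBERTa/Longformer baselines miss.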
- Europe > Poland > Greater Poland Province > Poznań (0.04)
- Europe > Ukraine (0.04)
- Europe > Netherlands (0.04)
- Law (1.00)
- Health & Medicine (0.93)